71.
Based on the recently developed sum-of-exponentials (SOE) approximation, we propose a fast algorithm to evaluate the one-dimensional convolution potential $\varphi(x)=K*\rho=\int_0^1 K(x-y)\rho(y)\,dy$ at (non)uniformly distributed target grid points $\{x_i\}_{i=1}^M$, where the kernel $K(x)$ may be singular at the origin and the source density $\rho(x)$ is given on a source grid $\{y_j\}_{j=1}^N$ that can differ from the target grid. The algorithm achieves optimal accuracy, inherited from the interpolation of the density $\rho(x)$, in $\mathcal{O}(M+N)$ operations. Using the kernel's SOE approximation $K_{ES}$, the potential is split into two integrals: the exponential convolution $\varphi_{ES}=K_{ES}*\rho$ and the local correction integral $\varphi_{cor}=(K-K_{ES})*\rho$. The exponential convolution is evaluated via the recurrence formula characteristic of the exponential function. The local correction integral is restricted to a small neighborhood of the target point, where the kernel singularity is handled directly. Rigorous estimates of the optimal accuracy are provided. The algorithm is well suited to parallelization and extends easily to complicated kernels. Extensive numerical results for different kernels are presented.
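The linear-time recurrence for the exponential convolution can be illustrated with a minimal sketch. For a single exponential term $e^{-s|x|}$ on a uniform grid serving as both source and target (a simplifying assumption; the paper handles distinct, nonuniform grids and full SOE sums, and all names below are ours), the convolution splits into a causal and an anti-causal sum, each updated by one multiplication per point:

```python
import numpy as np

def exp_conv_direct(s, rho, h):
    # O(N^2) reference: phi_i = sum_j exp(-s*|x_i - x_j|) * rho_j * h
    n = len(rho)
    x = np.arange(n) * h
    return (np.exp(-s * np.abs(x[:, None] - x[None, :])) * rho[None, :]).sum(axis=1) * h

def exp_conv_fast(s, rho, h):
    # O(N) recurrence: causal part (sources j <= i) and
    # anti-causal part (sources j > i), each decays by exp(-s*h) per step.
    n = len(rho)
    d = np.exp(-s * h)
    left = np.empty(n)
    right = np.empty(n)
    left[0] = rho[0] * h
    for i in range(1, n):
        left[i] = d * left[i - 1] + rho[i] * h
    right[n - 1] = 0.0
    for i in range(n - 2, -1, -1):
        right[i] = d * (right[i + 1] + rho[i + 1] * h)
    return left + right
```

A direct evaluation costs $\mathcal{O}(N^2)$; the recurrence reproduces the same sums in $\mathcal{O}(N)$. The full algorithm applies such a recurrence per SOE term and adds the local correction near the singularity.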
72.
Errors-in-variables (EIV) regression is widely used in econometric models. The statistical analysis becomes challenging when the regression function is discontinuous and the distribution of the measurement error is unknown. Most existing jump regression methods either assume that no measurement error is involved or require that jumps be explicitly detected before the regression function can be estimated. In some applications, however, the ultimate goal is to estimate the regression function while preserving the jumps during estimation. In this paper, we are concerned with reconstructing a jump regression curve from data that involve measurement error. We propose a direct jump-preserving method that does not explicitly detect jumps; the challenge of restoring jump structure masked by measurement error is handled by local clustering. Theoretical analysis shows that the proposed curve estimator is statistically consistent. A numerical comparison with an existing jump regression method highlights its jump-preserving property. Finally, we demonstrate the method with an application to a health tax policy study in Australia.
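The local-clustering idea can be illustrated with a simplified sketch (not the paper's estimator; the window rule, two-cluster criterion, and assignment step below are our assumptions, and no EIV correction is included): split the responses in each local window into two groups and average only the group the current observation belongs to, so data from across a jump are not mixed in.

```python
import numpy as np

def best_two_cluster_split(vals):
    # Split sorted 1-D values into two groups minimizing within-cluster SS;
    # return the two group means.
    v = np.sort(vals)
    best, cut = np.inf, 1
    for k in range(1, len(v)):
        ss = v[:k].var() * k + v[k:].var() * (len(v) - k)
        if ss < best:
            best, cut = ss, k
    return v[:cut].mean(), v[cut:].mean()

def jump_preserving_fit(x, y, bandwidth):
    # At each point, average only the response cluster the point belongs to,
    # so observations from the other side of a jump do not blur the estimate.
    est = np.empty_like(y, dtype=float)
    for i, xi in enumerate(x):
        w = y[np.abs(x - xi) <= bandwidth]
        m1, m2 = best_two_cluster_split(w)
        est[i] = m1 if abs(y[i] - m1) <= abs(y[i] - m2) else m2
    return est
```

On a noisy step function this keeps the two levels sharp near the discontinuity, whereas an ordinary moving average would smear them together.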
73.
74.

Purpose

Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. The current standard for measuring tumour growth or shrinkage is the one-dimensional RECIST 1.1. A proposed alternative for therapy monitoring is computer-aided volumetric analysis, which has shown high reliability and accuracy for lung metastases in experimental studies. Other metastatic lesions such as enlarged lymph nodes, however, are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) and short axis diameters (SAD) were compared with automated RECIST measurements.

Materials and methods

Multislice CT scans of the chest, abdomen and pelvis from 15 patients with lymph node metastases of malignant melanoma were included. Raw data were reconstructed with different slice thicknesses (1–5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using an Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and compared with a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences, and precision of the measurements as relative measurement deviation.

Results

Mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant, although a trend toward the medium soft tissue kernel was observed. No significant differences were found between automated and manual short axis diameter (SAD, RECIST 1.1) or long axis diameter (LAD, RECIST 1.0). The most unsatisfactory segmentation results occurred at higher slice thicknesses (3 and 5 mm) with the sharp tissue kernel.

Conclusion

Volumetric analysis of lymph nodes performs satisfactorily in a clinical setting. Thin slice reconstructions (≤3 mm) and a medium soft tissue reconstruction kernel are recommended. LAD and SAD did not differ significantly with respect to APE, and automated RECIST measurement showed a trend toward lower APE than manual measurement.
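The evaluation criterion is straightforward to reproduce; a minimal sketch of the APE computation used to compare each measurement against its reference (the function name is ours):

```python
import numpy as np

def absolute_percentage_error(measured, reference):
    # APE = |measured - reference| / reference * 100; requires reference > 0.
    # Accepts scalars or arrays, e.g. 85 node volumes against one reference each.
    return np.abs(np.asarray(measured, dtype=float) - reference) / reference * 100.0
```

The study's mean APE per setting would then be `absolute_percentage_error(volumes, refs).mean()` over the segmented nodes.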
75.
76.
Genetic studies of complex diseases often collect multiple phenotypes relevant to the disorders. Because these phenotypes can be correlated and share common genetic mechanisms, jointly analyzing them may bring more power to detect genes influencing individual or multiple phenotypes. Building on multivariate phenotype approaches and multimarker kernel machine regression, we construct a multivariate kernel machine regression to facilitate the joint evaluation of multimarker effects on multiple phenotypes. The kernel machine serves as a powerful dimension-reduction tool to capture complex effects among markers. The multivariate framework incorporates the potentially correlated multidimensional phenotypic information and accommodates common or different environmental covariates for each trait. We derive the multivariate kernel machine test based on a score-like statistic and conduct simulations to evaluate the validity and efficacy of the method. We also study the performance of the commonly adopted strategies for kernel machine analysis of multiple phenotypes, including multiple univariate kernel machine tests on the original phenotypes or on their principal components. Our results suggest that none of these approaches is uniformly most powerful; the optimal test depends on the magnitude of the phenotype correlation and the effect patterns. The multivariate test remains a reasonable approach when the phenotypes are uncorrelated or only mildly correlated, and gives the best power once the correlation becomes stronger or when genes affect more than one phenotype. We illustrate the utility of the multivariate kernel machine method with the Clinical Antipsychotic Trials of Intervention Effectiveness antibody study.
77.
Genome-wide association studies (GWAS) are a popular approach for identifying common genetic variants and epistatic effects associated with a disease phenotype. The traditional statistical analysis of such GWAS assesses the association between each individual single-nucleotide polymorphism (SNP) and the observed phenotype. Recently, kernel machine-based tests for association between a SNP set (e.g., the SNPs in a gene) and the disease phenotype have been proposed as a useful alternative to the traditional individual-SNP approach; they allow flexible modeling of the potentially complicated joint SNP effects within a set while adjusting for covariates. We extend the kernel machine framework to accommodate related subjects from multiple independent families, and provide a score-based variance component test for assessing the association of a given SNP set with a continuous phenotype, adjusting for additional covariates and accounting for within-family correlation. We illustrate the proposed method using simulation studies and an application to genetic data from the Genetic Epidemiology Network of Arteriopathy (GENOA) study.
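The score-based variance component test can be sketched for the unrelated-subjects case (a simplification: the extension above models within-family correlation, which is omitted here; the linear SNP-set kernel and function name are our assumptions):

```python
import numpy as np

def kernel_score_statistic(y, X, G):
    # Null model: continuous phenotype y regressed on covariates X
    # (including an intercept column), fitted by least squares.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta                  # residuals under the null
    K = G @ G.T                       # linear kernel on the SNP-set genotypes
    return float(r @ K @ r)           # Q = r' K r = ||G' r||^2 >= 0
```

Calibrating $Q$ requires its null distribution, a mixture of chi-squares obtained from the eigenvalues of the projected kernel (e.g., via Davies' method); that step is omitted in this sketch.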
78.
Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding, which regularizes the sparse codes with an affinity graph. However, because of noisy features and the nonlinear distribution of the data samples, an affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data. To overcome this problem, we integrate feature selection and multiple kernel learning into sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing these objectives iteratively, we develop novel data representation algorithms with feature selection and with multiple kernel learning, respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform traditional sparse coding methods.
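The basic sparse coding step that such objectives build on can be sketched with plain ISTA (iterative soft-thresholding) for a single sample; the manifold, feature-selection, and multiple-kernel terms of the proposed objectives are omitted, and the names below are ours:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(x, D, lam, n_iter=200):
    # ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1
    # where D is the dictionary and a the sparse code of sample x.
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

For an orthonormal dictionary the solution reduces to soft-thresholding the coefficients directly, which makes the routine easy to sanity-check.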
79.
The clinical recognition of drug–drug interactions (DDIs) is a crucial issue for both patient safety and health care cost control, so there is an urgent need for DDIs to be extracted automatically from the biomedical literature by text-mining techniques. Although the top-ranking DDI systems explore various textual features, these features cannot yet adequately express long and complicated sentences. In this paper, we present an effective graph kernel that makes full use of different types of contexts to identify DDIs in the biomedical literature. In our approach, the relations among long-range words, in addition to close-range words, are obtained from the graph representation of a parsed sentence. The context vectors of a vertex, an iterative vectorial representation of all labeled nodes adjacent and nonadjacent to it, adequately capture the information of direct and indirect substructures. Furthermore, a graph kernel that considers the distance between context vectors is used to detect DDIs. Experimental results on the DDIExtraction 2013 corpus show that our system achieves the best detection and classification performance (F-scores of 81.8 and 68.4, respectively). On the Medline-2013 dataset in particular, our system outperforms the top-ranking DDI systems by F-score margins of 10.7 in detection and 12.2 in classification.
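The context-vector idea can be sketched on a toy graph (a deliberately simplified propagation; the paper's iterative definition, normalization, and distance weighting are richer than this, and all names here are ours): each iteration mixes into a node's vector the label mass of progressively more distant neighbours of the parsed sentence's graph.

```python
import numpy as np

def context_vectors(A, labels, n_iter=2):
    # A: adjacency matrix of the dependency graph (n x n)
    # labels: one-hot node-label matrix (n x d)
    # Each iteration adds the neighbours' accumulated label mass.
    V = labels.astype(float)
    for _ in range(n_iter):
        V = V + A @ V
    return V

def graph_kernel(A1, L1, A2, L2, n_iter=2):
    # Sum of pairwise dot products between the two graphs' context vectors.
    V1 = context_vectors(A1, L1, n_iter)
    V2 = context_vectors(A2, L2, n_iter)
    return float((V1 @ V2.T).sum())
```

Comparing a candidate sentence graph against known DDI-bearing graphs with such a kernel is what a kernel classifier (e.g., an SVM) would then consume.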
80.
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned and allows us to develop efficient algorithms for exploiting labels. Three applications of the ideal regularization are considered. First, we use it to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ it to learn a data-dependent kernel matrix from an initial kernel matrix that contains prior similarity information, geometric structure, and labels of the data. Finally, we incorporate the ideal regularization into some state-of-the-art kernel learning problems; with this regularization, these problems can be formulated as simpler ones that permit more efficient solvers. Empirical results show that the ideal regularization exploits labels effectively and efficiently.
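A minimal sketch of the labels-into-kernel idea (our construction for illustration only; the paper's ideal regularization enters the learning objective as a linear function of the kernel matrix, not literally as this additive shift): encode labeled points as ±1 and unlabeled points as 0, so the "ideal" kernel is a rank-one outer product that is +1 for same-class labeled pairs, −1 for different-class pairs, and 0 wherever a label is missing.

```python
import numpy as np

def ideal_kernel(y, labeled):
    # y: class labels in {+1, -1}; labeled: boolean mask.
    # Zeroing unlabeled entries keeps T = t t' positive semidefinite.
    t = np.where(labeled, y, 0.0).astype(float)
    return np.outer(t, t)

def label_informed_kernel(K, y, labeled, lam=0.5):
    # Shift a standard kernel toward the ideal one; lam trades off
    # prior similarity against label agreement. PSD + lam*PSD stays PSD.
    return K + lam * ideal_kernel(y, labeled)
```

Because both summands are positive semidefinite, the shifted matrix remains a valid kernel, which is what makes such label-driven adjustments convenient inside standard kernel learners.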